Tolerating process variations in large, set-associative caches
Authors
Abstract
Similar resources
Improving Replacement Decisions in Set-Associative Caches
Cache replacement policies play a key role in determining hit rates in set-associative caches. Cache replacement algorithms use runtime trace history, most commonly a least recently used (LRU) policy, and neither programmers nor compilers can explicitly control cache replacement. This paper describes a novel mechanism to improve cache replacement decisions without the hardware costs of higher set-ass...
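The LRU replacement behavior mentioned above can be illustrated with a minimal sketch. The class, its parameters, and the fixed 64-byte line size are illustrative assumptions, not the paper's implementation:

```python
# Minimal sketch of LRU replacement in a set-associative cache
# (hypothetical example; parameters are assumptions, not from the paper).
from collections import OrderedDict

class SetAssociativeCache:
    def __init__(self, num_sets: int, ways: int):
        self.ways = ways
        # One ordered dict per set, mapping tag -> None, oldest entry first.
        self.sets = [OrderedDict() for _ in range(num_sets)]

    def access(self, address: int, line_size: int = 64) -> bool:
        """Return True on a hit, False on a miss (the line is filled either way)."""
        block = address // line_size
        index = block % len(self.sets)
        tag = block // len(self.sets)
        s = self.sets[index]
        if tag in s:
            s.move_to_end(tag)       # mark as most recently used
            return True
        if len(s) >= self.ways:
            s.popitem(last=False)    # evict the least recently used line
        s[tag] = None
        return False
```

Because the sets are ordered by recency of access, a miss in a full set always evicts the line that has gone unused the longest, which is exactly the behavior that the paper's mechanism aims to improve on.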
Timing Analysis for Data Caches and Set-Associative Caches
The contributions of this paper are twofold. First, an automatic tool-based approach is described to bound worst-case data cache performance. The given approach works on fully optimized code, performs the analysis over the entire control flow of a program, detects and exploits both spatial and temporal locality within data references, produces results typically within a few seconds, and estimates...
Generalizing Timing Predictions to Set-Associative Caches
Hard real-time systems rely on the assumption that the deadlines of tasks can be met; otherwise the safety of the controlled system is jeopardized. Several scheduling paradigms have been developed to support the analysis of a task set and determine if a schedule is feasible. These scheduling paradigms rely on the assumption that the worst-case execution time (WCET) of hard real-time tasks be k...
Accessing Multiple Sequences Through Set Associative Caches
The cache hierarchy prevalent in today's high-performance processors has to be taken into account in order to design algorithms which perform well in practice. We start from the empirical observation that external memory algorithms often turn out to be good algorithms for cached memory. This is not self-evident, since caches have a fixed and quite restrictive algorithm choosing the content of the c...
Direct-mapped versus set-associative pipelined caches
As the tag check may be executed in a specific pipeline stage, cache pipelining allows the same processor cycle time to be reached with a set-associative cache as with a direct-mapped cache. On a direct-mapped cache, the data or the instruction flowing out from the cache may be used in parallel with the tag check. When using a pipelined cache, such an optimistic execution results in load and branch delays o...
Journal
Journal title: ACM Transactions on Architecture and Code Optimization
Year: 2009
ISSN: 1544-3566,1544-3973
DOI: 10.1145/1543753.1543757